[Model] Added Google T5 model support to vLLM #11780
base: main
Conversation
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can do one of these:
Thanks for implementing this! cc @afeldman-nm It looks like this implementation doesn't use the custom attention bias.
Force-pushed from 736f0e6 to 36f7a01 (Compare)
Thanks for pointing out the custom attention bias, @DarkLight1337. This seems like a more extensive effort, and I would like to put up a separate PR for it.
Signed-off-by: Baishali <[email protected]>
Signed-off-by: Baishali <[email protected]>
Signed-off-by: Baishali <[email protected]>
Force-pushed from 68f9b64 to d2cc40a (Compare)
Signed-off-by: Baishali <[email protected]>
@mgoin @robertgshaw2-redhat please provide your reviews. Thanks.
Hi team, I am trying to debug the failed entrypoint test. I tried to recreate this pytest locally, but I am unable to. Please let me know how I should set up the environment, and any guidance on reproducing and debugging this error locally would be appreciated.
@@ -157,7 +157,7 @@ def _prepare_decoder_input_ids_for_generation(
     if decoder_input_ids is None:
         # no decoder prompt input ->
         # use decoder_start_token_id as decoder_input_ids
-        decoder_input_ids = self._get_default_enc_dec_decoder_prompt()
+        decoder_input_ids = [decoder_start_token_id]
I think it has something to do with this change.
I see. Could you provide guidance on how I can recreate this error locally? Thanks. Currently I am running the pytest command but getting an error related to an environment issue.
self._get_default_enc_dec_decoder_prompt() returns bos_token_id rather than decoder_start_token_id.
@afeldman-nm @robertgshaw2-redhat do you know why that's the case despite the code comments here?
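(Not part of this PR: a quick sanity check that shows why the two ids are not interchangeable for T5. The checkpoint name and printed values below are for t5-small and may differ for other models.)

# Hypothetical sanity check, outside vLLM.
from transformers import AutoConfig, AutoTokenizer

config = AutoConfig.from_pretrained("t5-small")
tokenizer = AutoTokenizer.from_pretrained("t5-small")

print(config.decoder_start_token_id)  # 0 -- T5 starts decoding from the pad token
print(config.pad_token_id)            # 0
print(tokenizer.bos_token_id)         # None -- the T5 tokenizer defines no BOS token

So falling back to bos_token_id would give the decoder a different (or missing) start token than the one the model was trained with.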
@NickLucche also has a couple of pending PRs in support of this:
@njhill thanks for pointing this out. I was hoping the reviewers could coordinate this. Please let me know a feasible next step to avoid redundancy between the two sets of PRs. Thanks.
attn_output = F.scaled_dot_product_attention(q,
                                             k,
                                             v,
                                             dropout_p=0.0)
else:
    qkv_enc, _ = self.qkv_proj(encoder_hidden_states)
    _, k, v = qkv.split([2048, 2048, 2048], dim=-1)
    attn_output = F.scaled_dot_product_attention(q,
We can't just call torch to compute the attention scores for us, because we wouldn't have any of the features we work so hard for here (KV cache, paged attention, hardware-tuned kernels, ...). Did you take a look at https://docs.vllm.ai/en/latest/contributing/model/basic.html#initialization-code?
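For reference, a minimal sketch (not code from this PR) of routing attention through vLLM's Attention layer instead of F.scaled_dot_product_attention. The class name is hypothetical and the constructor/forward signatures differ across vLLM versions, so treat this only as an illustration of the integration point described in the docs above.

import torch
import torch.nn as nn
from vllm.attention import Attention

class SketchT5SelfAttention(nn.Module):
    """Illustrative only; check the Attention API of your target vLLM version."""

    def __init__(self, num_heads: int, head_size: int, cache_config=None):
        super().__init__()
        # The Attention layer dispatches to the configured backend (paged
        # attention, FlashAttention, ...) and manages the KV cache -- exactly
        # what a bare F.scaled_dot_product_attention call bypasses.
        self.attn = Attention(num_heads,
                              head_size,
                              scale=1.0,  # T5 does not rescale scores by 1/sqrt(d)
                              cache_config=cache_config)

    def forward(self, q: torch.Tensor, k: torch.Tensor,
                v: torch.Tensor) -> torch.Tensor:
        # q/k/v come from the model's own projections; masking and KV-cache
        # reads/writes happen inside the backend.
        return self.attn(q, k, v)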
                                             k,
                                             v,
                                             dropout_p=0.0)
output, _ = self.out_proj(attn_output)
As @njhill already pointed out, there's no attention bias added here, which is really the main difference of T5: having no positional encoding, the model can't tell token positions in the sequence.
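To make the missing piece concrete, below is a sketch of T5-style relative position bias, paraphrased from the Hugging Face transformers T5Attention implementation (function names here are illustrative). The learned relative_attention_bias embedding is the projection matrix referred to in the next comment, and its output is added to the attention scores before softmax; it is T5's only source of position information.

import math
import torch
import torch.nn as nn

def relative_position_bucket(relative_position, bidirectional=True,
                             num_buckets=32, max_distance=128):
    # Map signed relative distances to a small set of buckets: exact buckets
    # for nearby tokens, log-spaced buckets for distant ones.
    buckets = 0
    if bidirectional:
        num_buckets //= 2
        buckets += (relative_position > 0).long() * num_buckets
        relative_position = relative_position.abs()
    else:
        relative_position = -torch.clamp(relative_position, max=0)
    max_exact = num_buckets // 2
    is_small = relative_position < max_exact
    large = max_exact + (
        torch.log(relative_position.float() / max_exact)
        / math.log(max_distance / max_exact) * (num_buckets - max_exact)
    ).long()
    large = torch.clamp(large, max=num_buckets - 1)
    return buckets + torch.where(is_small, relative_position, large)

def compute_bias(rel_attn_bias: nn.Embedding, q_len: int, k_len: int,
                 bidirectional: bool) -> torch.Tensor:
    # Returns a (1, num_heads, q_len, k_len) bias added to attention scores
    # before softmax; rel_attn_bias is an nn.Embedding(num_buckets, num_heads).
    ctx = torch.arange(q_len)[:, None]
    mem = torch.arange(k_len)[None, :]
    buckets = relative_position_bucket(mem - ctx, bidirectional=bidirectional)
    return rel_attn_bias(buckets).permute(2, 0, 1).unsqueeze(0)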
Hey, thanks for the effort! Unfortunately, the current implementation is quite far from complete: it does not integrate with any of the supported attention backends and has no positional encoding system. The model shouldn't even be able to load (the relative position encoding projection matrix is discarded), and even if it did, I would be surprised if the generated tokens made any sense (no positioning info).
This PR adds T5 model integration code to support this model in the vLLM repository.
I have tested the results locally on both NVIDIA H100 GPUs and MI300X GPUs using the rocm/vllm repository.